A ballbot is a good platform for testing the effectiveness of balance controllers. For balance control, model-based feedback control methods have been widely used. However, contacts and collisions are difficult to model and often cause balance control to fail, especially when the ballbot tilts at a large angle. To explore the maximum initial tilt angle of the ballbot, the balance control problem is interpreted as a recovery task and addressed with reinforcement learning (RL). RL is a powerful technique for systems that are difficult to model, because it allows an agent to learn a policy by interacting with the environment. In this paper, a compound controller is proposed by combining a conventional feedback controller with an RL method. We show the effectiveness of the compound controller by training an agent to successfully perform recovery tasks involving contacts and collisions. Simulation results demonstrate that, compared with a conventional model-based controller, the compound controller keeps the ballbot balanced under larger initial tilt angles.
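As a rough illustration of the compound control idea, the sketch below adds an RL policy's output as a correction on top of a conventional feedback term. The PD form, the gain values, and the observation layout are assumptions made for illustration, not the controller described in the paper.

```python
import numpy as np

# Hypothetical PD gains for the model-based balance term; the paper does not specify these.
K_P, K_D = 40.0, 6.0

def feedback_torque(tilt, tilt_rate):
    """Conventional model-based feedback term (a PD form is used here for illustration)."""
    return -K_P * tilt - K_D * tilt_rate

def rl_correction(obs, policy):
    """RL policy outputs an additive torque correction learned from interaction."""
    return policy(obs)

def compound_torque(tilt, tilt_rate, policy):
    """Compound controller: conventional feedback plus the learned correction."""
    obs = np.array([tilt, tilt_rate])
    return feedback_torque(tilt, tilt_rate) + rl_correction(obs, policy)

# Toy usage: an untrained "policy" that outputs zero correction.
zero_policy = lambda obs: 0.0
print(compound_torque(tilt=0.3, tilt_rate=-0.1, policy=zero_policy))
```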
To better exploit search logs and model users' behavior patterns, numerous click models have been proposed to extract users' implicit interaction feedback. Most traditional click models are based on the probabilistic graphical model (PGM) framework, which requires manually designed dependencies and may oversimplify user behaviors. Recently, methods based on neural networks have been proposed to improve the prediction accuracy of user behaviors by enhancing expressive power and allowing flexible dependencies. However, they still suffer from data sparsity and cold-start problems. In this paper, we propose a novel graph-enhanced click model (GraphCM) for web search. Firstly, we regard each query or document as a vertex and propose novel homogeneous graph construction methods for queries and documents respectively, fully exploiting both intra-session and inter-session information to address the sparsity and cold-start problems. Secondly, following the examination hypothesis, we separately model the attractiveness estimator and the examination predictor to output attractiveness scores and examination probabilities, where graph neural networks and neighbor interaction techniques are applied to extract the auxiliary information encoded in the pre-constructed homogeneous graphs. Finally, we apply combination functions to integrate the examination probabilities and attractiveness scores into click predictions. Extensive experiments conducted on three real-world session datasets show that GraphCM not only outperforms state-of-the-art models, but also achieves superior performance in addressing the data sparsity and cold-start problems.
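The final combination step follows the standard examination hypothesis, where the click probability is the product of an attractiveness score and an examination probability. The sketch below illustrates this with plain MLP heads standing in for GraphCM's graph-enhanced estimators; the layer sizes and inputs are placeholders.

```python
import torch
import torch.nn as nn

class ExaminationHypothesisHead(nn.Module):
    """Minimal sketch of the combination step: separate estimators produce an
    attractiveness score and an examination probability, which are multiplied
    into a click probability (examination hypothesis)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.attract = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.examine = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, doc_repr: torch.Tensor, context_repr: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.attract(doc_repr))         # attractiveness in (0, 1)
        epsilon = torch.sigmoid(self.examine(context_repr))   # examination probability in (0, 1)
        return alpha * epsilon                                 # P(click) = P(attractive) * P(examined)

head = ExaminationHypothesisHead()
p_click = head(torch.randn(8, 64), torch.randn(8, 64))         # batch of 8 results
```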
To provide click simulation or relevance estimation based on users' implicit interaction feedback, click models have been studied extensively in recent years. Most click models focus on user behaviors towards a single list. However, with the development of user interface (UI) design, the layout of items displayed on a result page tends to follow a multi-block (i.e., multi-list) style instead of a single list, which requires different assumptions to model user behaviors more accurately. Click models exist for multi-block pages in desktop contexts, but they cannot be directly applied to mobile scenarios due to different interaction manners, result types, and especially multi-block presentation styles. In particular, multi-block mobile pages can usually be decomposed into interleavings of basic vertical and horizontal blocks, resulting in a typical F-shape form. To bridge the gap between desktop and mobile contexts for multi-block pages, we conduct a user eye-tracking study and identify users' sequential browsing, block skip, and comparison patterns on F-shape pages. These findings lead to the design of a novel F-Shape Click Model (FSCM), which serves as a general solution for multi-block mobile pages. Firstly, we construct a directed acyclic graph (DAG) for each page, where each item is regarded as a vertex and each edge indicates a possible user examination flow. Secondly, we propose DAG-structured GRUs and a comparison module to model users' sequential (sequential browsing, block skip) and non-sequential (comparison) behaviors, respectively. Finally, we combine the GRU states and comparison patterns to perform user click prediction. Experiments on a large-scale real-world dataset validate the effectiveness of FSCM on user behavior prediction compared with baseline models.
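A minimal sketch of the DAG-structured recurrence described above: items are processed in topological order, and each vertex's incoming GRU state aggregates the states of its parent vertices. Mean aggregation is an assumption for illustration, and the comparison module is omitted.

```python
import torch
import torch.nn as nn

def dag_gru_states(x, parents, gru_cell):
    """Run a GRUCell over items in topological order; the hidden state fed to each
    vertex is the mean of its parents' states (vertices with no parents start from zeros)."""
    hidden = gru_cell.hidden_size
    states = []
    for v in range(x.size(0)):                     # indices assumed topologically sorted
        if parents[v]:
            h_in = torch.stack([states[p] for p in parents[v]]).mean(dim=0, keepdim=True)
        else:
            h_in = torch.zeros(1, hidden)
        states.append(gru_cell(x[v:v + 1], h_in).squeeze(0))
    return torch.stack(states)

# Toy usage: 4 items with edges 0->1, 0->2, (1,2)->3, and 16-dim item features.
cell = nn.GRUCell(input_size=16, hidden_size=32)
states = dag_gru_states(torch.randn(4, 16), parents=[[], [0], [0], [1, 2]], gru_cell=cell)
```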
Domain adaptation (DA) has recently raised strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of them have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, diagnosis and surveillance of VS patients are performed with contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. We therefore created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 scans (N=105) and unpaired, non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans of the test set (N=137). A total of 16 teams submitted algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice - VS: 88.4%; cochleas: 85.7%) and close to full supervision (median Dice - VS: 92.5%; cochleas: 87.7%). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
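The common recipe of the top-performing teams can be sketched as two stages: translate annotated ceT1 images into pseudo-hrT2 images, then train a segmentation network on the pseudo images with the original ceT1 labels. The toy networks, shapes, and optimizer settings below are placeholders, not any team's actual pipeline.

```python
import torch
import torch.nn as nn

# Minimal stand-ins for the two networks: an image-to-image translator (ceT1 -> pseudo-hrT2)
# and a segmentation network predicting 3 classes (background, VS, cochlea).
translator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
segmenter = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(segmenter.parameters(), lr=1e-3)

# One illustrative step: translate an annotated ceT1 slice into the hrT2 style,
# then supervise the segmenter with the ceT1 label map (assumed unchanged by translation).
cet1_slice = torch.randn(2, 1, 64, 64)          # toy batch of source-domain slices
cet1_labels = torch.randint(0, 3, (2, 64, 64))  # corresponding manual annotations

with torch.no_grad():                           # translator assumed already trained (e.g. CycleGAN-style)
    pseudo_hrt2 = translator(cet1_slice)

loss = criterion(segmenter(pseudo_hrt2), cet1_labels)
loss.backward()
optimizer.step()
```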
Image synthesis and image recognition have witnessed remarkable progress, but often at the cost of computationally expensive training and inference. Learning lightweight yet expressive deep models has emerged as an important and interesting direction. This paper presents a slim building block (SLIM) that facilitates slim learning for image synthesis models, together with a same-layer variant (dubbed SLIM too) as a stronger alternative to the well-known ResNeXt for image recognition. In SLIM, the input feature maps are first split into multiple groups (e.g., 4) and then transformed into a latent style vector (via channel-wise attention) and a latent spatial mask (via spatial attention). The learned latent mask and latent style vector are aggregated to modulate the target feature maps. For generative learning, SLIM is built on the recently proposed lightweight generative adversarial networks (i.e., FastGANs), which feature a skip-layer excitation (SLE) module. For few-shot image synthesis tasks, the proposed SLIM achieves better performance than the SLE counterpart and other related methods. For one-shot image synthesis tasks, it shows a stronger ability to preserve image structures than prior art (e.g., SinGAN). For image classification tasks, the proposed SLIM is used as a replacement for the convolution layers in ResNets (resulting in ResNeXt-like models) and achieves better accuracy on the ImageNet-1000 dataset with significantly smaller model complexity.
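A rough sketch of the modulation mechanism as described: the input is split into groups, channel-wise attention yields a latent style vector, spatial attention yields a latent mask, and the two jointly modulate a target feature map. The specific layers, group handling, and aggregation below are assumptions, not the published SLIM block.

```python
import torch
import torch.nn as nn

class SlimLikeModulation(nn.Module):
    """Sketch only: derive a channel-wise style vector and a spatial mask from the input,
    then use both to modulate a target feature map. Layer choices are illustrative."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.groups = groups
        self.style = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.mask = nn.Sequential(nn.Conv2d(channels // groups, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        style = self.style(x)                                             # (B, C, 1, 1) channel weights
        grouped = x.view(b * self.groups, c // self.groups, h, w)         # split into groups
        mask = self.mask(grouped).view(b, self.groups, 1, h, w).mean(dim=1)  # (B, 1, H, W) spatial mask
        return target * style * mask                                      # modulate the target feature map

block = SlimLikeModulation(channels=32)
out = block(torch.randn(2, 32, 16, 16), torch.randn(2, 32, 16, 16))
```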
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
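As one plausible reading of the image-level alignment step, the sketch below matches the per-channel mean and standard deviation of a target-domain image to those of a source-domain image. The actual global photometric alignment module may align richer statistics; this only illustrates the idea of closing low-level statistical gaps.

```python
import torch

def global_photometric_alignment(target_img: torch.Tensor, source_img: torch.Tensor) -> torch.Tensor:
    """Minimal sketch: shift and scale each channel of the target image so that its global
    mean/std match those of a reference source image. Inputs are (C, H, W) tensors."""
    t_mean, t_std = target_img.mean(dim=(1, 2), keepdim=True), target_img.std(dim=(1, 2), keepdim=True)
    s_mean, s_std = source_img.mean(dim=(1, 2), keepdim=True), source_img.std(dim=(1, 2), keepdim=True)
    return (target_img - t_mean) / (t_std + 1e-6) * s_std + s_mean

aligned = global_photometric_alignment(torch.rand(3, 256, 256), torch.rand(3, 256, 256))
```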
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
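A toy aggregation in the spirit of a saliency-aware artifact metric: per-pixel scores for the six PEA types are pooled with saliency weighting and linearly combined. The pooling rule and the equal weights are assumptions; SSTAM's actual formulation is not reproduced here.

```python
import numpy as np

def saliency_weighted_score(artifact_maps: dict, saliency: np.ndarray, weights: dict) -> float:
    """Pool each per-pixel artifact map with saliency weighting, then combine linearly."""
    pooled = {name: float((amap * saliency).sum() / saliency.sum()) for name, amap in artifact_maps.items()}
    return sum(weights[name] * score for name, score in pooled.items())

h, w = 64, 64
saliency = np.random.rand(h, w)                    # stand-in for the visual saliency map
artifact_maps = {name: np.random.rand(h, w) for name in
                 ["blurring", "blocking", "bleeding", "ringing", "flickering", "floating"]}
weights = {name: 1.0 / 6 for name in artifact_maps}  # assumed equal weighting for illustration
print(saliency_weighted_score(artifact_maps, saliency, weights))
```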
Image virtual try-on aims at replacing the clothes in a person image with a garment image (in-shop clothes), which has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to an unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose priors, textures of various complexities are selectively blended with human parts in a copy-and-paste manner. Then, a Generative Module (GM) is utilized to synthesize the final try-on image and learn de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
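The semantically guided mixup can be pictured as a masked copy-and-paste blend: an occluding texture is mixed into the try-on image only within a selected human-part region. The blending rule and the alpha value below are illustrative assumptions, not the module's exact formulation.

```python
import numpy as np

def semantic_mixup(try_on_img: np.ndarray, texture_patch: np.ndarray,
                   part_mask: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Toy semantically guided mixup: blend an occluding texture onto the try-on image
    only where a chosen human-part mask is active (copy-and-paste style)."""
    mask = part_mask[..., None].astype(np.float32)             # (H, W, 1) binary part mask
    return (1 - alpha * mask) * try_on_img + alpha * mask * texture_patch

h, w = 128, 96
occluded = semantic_mixup(np.random.rand(h, w, 3), np.random.rand(h, w, 3),
                          part_mask=(np.random.rand(h, w) > 0.8))
```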
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features and the things/stuff features, respectively. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term this model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives; it can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and building on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme to further boost part segmentation quality; the part-whole interaction is realized with masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
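The masked cross attention used for part-whole interaction can be sketched as standard cross attention whose logits are blocked outside a predicted foreground region, in the style popularised by Mask2Former. The shapes and the random mask below are placeholders.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(queries, keys, values, attn_mask):
    """Sketch of masked cross attention: each object query attends only to the pixel
    locations allowed by attn_mask (True = blocked).
    Shapes: queries (Q, D), keys/values (N, D), attn_mask (Q, N) boolean."""
    logits = queries @ keys.t() / keys.size(-1) ** 0.5     # scaled dot-product attention logits
    logits = logits.masked_fill(attn_mask, float("-inf"))  # block attention outside the mask
    return F.softmax(logits, dim=-1) @ values

Q, N, D = 8, 256, 64                                       # 8 queries over 256 pixel tokens
out = masked_cross_attention(torch.randn(Q, D), torch.randn(N, D), torch.randn(N, D),
                             attn_mask=torch.rand(Q, N) > 0.5)
```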
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and illustrate that MSTAT can achieve state-of-the-art accuracy on various standard benchmarks.
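Temporal patch shuffling can be sketched as independently permuting each spatial patch position along the time axis, so that the model cannot rely on a fixed temporal order. The tensor layout below is an assumption made for illustration.

```python
import torch

def temporal_patch_shuffle(patch_tokens: torch.Tensor) -> torch.Tensor:
    """Sketch of temporal patch shuffling: for every spatial patch position, randomly
    permute its tokens along the time axis. Input shape: (batch, time, num_patches, dim)."""
    b, t, p, d = patch_tokens.shape
    perm = torch.argsort(torch.rand(b, t, p), dim=1)           # independent time permutation per patch
    index = perm.unsqueeze(-1).expand(b, t, p, d)
    return patch_tokens.gather(dim=1, index=index)

shuffled = temporal_patch_shuffle(torch.randn(2, 8, 49, 768))  # 8 frames, 7x7 patches, 768-dim tokens
```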